In this paper we consider the impact of trust on a new visitor's intention to revisit a website, but instead of using the typical expectancy-value theories as our conceptual basis, we look at the issue from the perspective of cognitive complexity and "humans as cognitive misers." Starting with the suggestion that it is cognitively taxing to distrust, we propose that, in order to conserve cognitive resources, once new visitors have convinced themselves that a website is "trustworthy enough," they will drop trustworthiness from their concerns and consider only other characteristics of the website (e.g., task-technology fit, aesthetic appeal) in determining their revisit intention. This leads to what we call a "trust tipping point" and two different worlds of trust. Above the tipping point, revisit intention is constructed in one way; below the trust tipping point, it is constructed in quite a different way. This perspective results in very different recommendations for website designers as to the likely payoff from improving task-technology fit, aesthetic appeal, or trustworthiness, depending upon where their existing website stands relative to the trust tipping point. To test our hypotheses, we used data from 314 student website users and expanded a technique called piecewise regression (Neter et al., Applied Linear Statistical Models, 4th ed.) to allow us to analyze the data as two different linear surfaces joined at the tipping point. We found good support for our assertion that users operate differently above and below a trust tipping point.
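The piecewise-regression idea above, two linear segments joined at a knot, can be sketched with a standard hinge-term formulation. This is an illustrative sketch, not the authors' actual estimation procedure; the simulated data, the known knot location, and all variable names are assumptions for demonstration only.

```python
import numpy as np

def fit_piecewise(x, y, knot):
    """Fit y = b0 + b1*x + b2*max(x - knot, 0) by ordinary least squares.

    b1 is the slope below the knot; b2 is the *change* in slope above it,
    so the upper-segment slope is b1 + b2. The hinge term joins the two
    linear pieces continuously at the knot (the "tipping point")."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical data: slope 0.2 below x = 5, slope 1.5 above it, joined at x = 5.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = np.where(x < 5, 1.0 + 0.2 * x, 2.0 + 1.5 * (x - 5)) + rng.normal(0, 0.1, 300)
b = fit_piecewise(x, y, knot=5.0)
# b[1] recovers the lower-segment slope (~0.2); b[2] the slope change (~1.3).
```

In practice the knot location is usually unknown and is itself estimated, e.g., by searching over candidate knots for the best fit; the fixed knot here keeps the sketch minimal.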
There is a pervasive belief in the MIS research community that PLS has advantages over other techniques when analyzing small sample sizes or data with non-normal distributions. Based on these beliefs, major MIS journals have published studies using PLS with sample sizes that would be deemed unacceptably small if used with other statistical techniques. We used Monte Carlo simulation more extensively than previous research to evaluate PLS, multiple regression, and LISREL in terms of accuracy and statistical power under varying conditions of sample size, normality of the data, number of indicators per construct, reliability of the indicators, and complexity of the research model. We found that PLS performed as effectively as the other techniques in detecting actual paths and in not falsely detecting non-existent paths. However, because PLS (like regression) apparently does not compensate for measurement error, PLS and regression were consistently less accurate than LISREL. When used with small sample sizes, PLS, like the other techniques, suffers from increased standard deviations, decreased statistical power, and reduced accuracy. All three techniques were remarkably robust against moderate departures from normality, and equally so. In total, we found that the similarities in results across the three techniques were much stronger than the differences.
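The Monte Carlo logic described above can be illustrated in miniature: repeatedly generate data from a model with one real path and one non-existent path, then count how often each is "detected." This is a simplified sketch using plain regression only (not the authors' PLS/LISREL simulation design); the effect size, replication count, and function names are assumptions for illustration.

```python
import numpy as np

def ols_tstats(X, y):
    """OLS coefficient t-statistics for design matrix X and response y."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(XtX_inv) * sigma2)
    return beta / se

def power_sim(n, beta_true=0.3, reps=2000, crit=1.96, seed=0):
    """Estimate power (detecting the real path x1 -> y) and the false
    positive rate (detecting the non-existent path x2 -> y) at sample size n."""
    rng = np.random.default_rng(seed)
    hits = false_pos = 0
    for _ in range(reps):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = beta_true * x1 + rng.normal(size=n)  # x2 has no effect on y
        t = ols_tstats(np.column_stack([np.ones(n), x1, x2]), y)
        hits += abs(t[1]) > crit       # real path detected
        false_pos += abs(t[2]) > crit  # spurious path "detected"
    return hits / reps, false_pos / reps
```

Running `power_sim` at several sample sizes shows the pattern the abstract reports for all three techniques: power falls as n shrinks, while the false positive rate stays near the nominal 5% level.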
In the Foreword to an MIS Quarterly Special Issue on PLS, the senior editors for the special issue noted that they rejected a number of papers because the authors attempted comparisons between results from PLS, multiple regression, and structural equation modeling (Marcoulides et al. 2009). They raised several issues they argued had to be taken into account to have legitimate comparison studies, supporting their position primarily by citing three authors: Dijkstra (1983), McDonald (1996), and Schneeweiss (1993). As researchers interested in conducting comparison studies, we read the Foreword carefully, but found it did not provide clear guidance on how to conduct "legitimate" comparisons. Nor did our reading of Dijkstra, McDonald, and Schneeweiss raise any red flags about dangers in this kind of comparison research. We were concerned that instead of helping researchers to successfully engage in comparison research, the Foreword might end up discouraging that type of work, and might even be used incorrectly to reject legitimate comparison studies. This Issues and Opinions piece addresses the question of why one might conduct comparison studies, and gives an overview of the process of comparison research with a focus on what is required to make those comparisons legitimate. In addition, we explicitly address the issues raised by Marcoulides et al., to explore where they might (or might not) come into play when conducting or evaluating this type of study.
We present a model of the organizational impacts of enterprise resource planning (ERP) systems once the system has gone live and the "shake-out" phase has occurred. Organizational information processing theory states that performance is influenced by the level of fit between information processing mechanisms and organizational context. Two important elements of this context are interdependence and differentiation among subunits of the organization. Because ERP systems include data and process integration, the theory suggests that ERP will be a relatively better fit when interdependence is high and differentiation is low. Our model focuses on the subunit level of the organization (a business function or location, such as a manufacturing plant) and includes intermediate benefits through which ERP's overall subunit impact occurs (in our case at the plant level). ERP customization and the amount of time since ERP implementation are also included in the model. The resulting causal model is tested using a questionnaire survey of 111 manufacturing plants. The data support the key assertions in the model.
From 1990 through 1998, First American Corporation (FAC) changed its corporate strategy from a traditional banking approach to a customer relationship-oriented strategy that placed FAC's customers at the center of all aspects of the company's operations. The transformation made FAC an innovative leader in the financial services industry. This case study describes FAC's transformation and the way in which a data warehouse called VISION helped make it happen. FAC's experiences suggest lessons for managers who plan to use technology to support changes that are designed to significantly improve organizational performance. In addition, they raise interesting questions about the means by which information technology can be used to gain competitive advantage.
There is strong evidence that data items stored in organizational databases have a significant rate of errors. If undetected in use, those errors in stored data may significantly affect business outcomes. Published research suggests that users of information systems tend to be ineffective in detecting data errors. However, in this paper it is argued that, rather than accepting poor human error detection performance, MIS researchers need to develop better theories of human error detection and to improve their understanding of the conditions for improving performance. This paper applies several theory bases (primarily signal detection theory, but also a theory of individual task performance, theories of effort and accuracy in decision making, and theories of goals and incentives) to develop a set of propositions about successful human error detection. These propositions are tested in two laboratory experiments. The results present a strong challenge to earlier assertions that humans are poor detectors of data errors. The findings of the two experiments show that explicit error detection goals and incentives can modify error detection performance. These findings provide an improved understanding of the conditions under which users detect data errors. They indicate that it is possible to influence detection behavior in organizational settings through managerial directives, training, and incentives.
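The signal detection framing above treats error detection as a discrimination task: a "hit" is an actual data error correctly flagged, and a "false alarm" is a correct item wrongly flagged. The standard indices of sensitivity (d') and response bias (c) can be computed from those two rates. This is a generic textbook illustration, not the authors' analysis; the example rates are invented.

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Signal detection indices from hit and false-alarm rates.

    d' (sensitivity) is the separation between the error and no-error
    distributions in standard-deviation units; c (criterion) reflects
    response bias, with negative values indicating a liberal tendency
    to flag items as errors."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A hypothetical user who flags 90% of seeded errors but also
# flags 20% of correct items:
d, c = sdt_indices(0.9, 0.2)
```

Separating sensitivity from bias is what makes the framework useful here: incentives might raise hit rates merely by shifting the criterion (flagging more of everything) rather than by improving genuine discrimination, and d' distinguishes the two.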
A key concern in Information Systems (IS) research has been to better understand the linkage between information systems and individual performance. The research reported in this study has two primary objectives: (1) to propose a comprehensive theoretical model that incorporates valuable insights from two complementary streams of research, and (2) to empirically test the core of the model. At the heart of the new model is the assertion that for an information technology to have a positive impact on individual performance, the technology: (1) must be utilized and (2) must be a good fit with the tasks it supports. This new model is moderately supported by an analysis of data from over 600 individuals in two companies. This research highlights the importance of the fit between technologies and users' tasks in achieving individual performance impacts from information technology. It also suggests that task-technology fit, when decomposed into its more detailed components, could be the basis for a strong diagnostic tool to evaluate whether information systems and services in a given organization are meeting user needs.
In spite of strong conceptual arguments for the value of strategic data planning (SDP) as a means to increase data integration in large organizations, empirical research has found more evidence of problems than of success. In this paper, four detailed case studies of SDP efforts, along with summaries of five previously reported efforts, are analyzed. Fifteen specific propositions are offered, with two overall conclusions. The first conclusion is that SDP, though conceived of as a generally appropriate method, may not be the best planning approach in all situations. The second conclusion is that the SDP method of analyzing business functions and their data requirements may not be the best way to develop a "data architecture," given the required level of commitment of talented individuals, the cost, the potential errors, and the high level of abstraction of the result. These lessons can aid practitioners in deciding when to use SDP and guide them as they begin the process of rethinking and modifying SDP to make it more effective.
For many organizations, the ability to make coordinated, organization-wide responses to today's business problems is thwarted by the lack of data integration or commonly defined data elements and codes across different information systems. Though many researchers and practitioners have implicitly assumed that data integration always results in net benefits to an organization, this article questions that view. Based on theories of organizational information processing, a model of the impact of data integration is developed that includes gains in organization-wide coordination and organization-wide decision making, as well as losses in local autonomy and flexibility, and changes in system design and implementation costs. The importance of each of these impacts is defended by theoretical arguments and illustrated by case examples. This model suggests that the benefits of data integration will outweigh costs only under certain situations, and probably not for all the data the organization uses. Therefore, MIS researchers and practitioners should consider the need for better conceptualization and methods for implementing "partial integration" in organizations.
Today, corporations are placing increasing emphasis on the management of data. To learn more about effective approaches to "managing the data resource," case studies of 31 data management efforts in 20 diverse firms have been conducted. The major finding is that there is no single, dominant approach to improving the management of data. Rather, firms have adopted multiple approaches that appear to be very diverse in (1) business objective, (2) organizational scope, (3) planning method, and (4) "product," i.e., deliverable produced. The dominant business objective for successful action is improved managerial information; most data management efforts are "targeted" without a formal data planning process; and the dominant product was "information databases." In addition, several key organizational issues must be addressed when undertaking any data management effort.